SUMMARY: This project constructs a predictive model using a TensorFlow convolutional neural network (CNN) and documents the end-to-end steps using a template. The Fruits and Vegetables Image Recognition dataset poses a multi-class classification problem, where we attempt to predict one of several (more than two) possible outcomes.
INTRODUCTION: The dataset owner collected more than 4,300 images of fruits and vegetables and organized them into a dataset with 36 classes. The idea was to build an application that recognizes the food items in a captured photo and suggests recipes that can be made with them.
ANALYSIS: The MobileNetV3Large model's performance achieved an accuracy score of 93.73% after 40 epochs using a separate validation dataset. After tuning the learning rate, we improved the accuracy rate to 95.44% using the same validation dataset. When we applied the model to the test dataset, the model achieved an accuracy score of 93.03%.
CONCLUSION: In this iteration, the TensorFlow MobileNetV3Large CNN model appeared suitable for modeling this dataset.
Dataset ML Model: Multi-class classification with image data
Dataset Used: Kritik Seth, "Fruits and Vegetables Image Recognition Dataset," Kaggle 2020
Dataset Reference: https://www.kaggle.com/datasets/kritikseth/fruit-and-vegetable-image-recognition
One source of potential performance benchmarks: https://www.kaggle.com/datasets/kritikseth/fruit-and-vegetable-image-recognition/code
# # Install the packages to support accessing environment variable and SQL databases
# !pip install python-dotenv PyMySQL boto3
# Retrieve CPU information from the system
ncpu = !nproc
print("The number of available CPUs is:", ncpu[0])
The number of available CPUs is: 12
# Retrieve memory configuration information
from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
Your runtime has 89.6 gigabytes of available RAM
# Retrieve GPU configuration information
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
print(gpu_info)
Sat Apr 9 12:50:51 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB Off | 00000000:00:04.0 Off | 0 |
| N/A 31C P0 42W / 400W | 0MiB / 40536MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
# # Mount Google Drive locally for loading the dotenv files
# from dotenv import load_dotenv
# from google.colab import drive
# drive.mount('/content/gdrive')
# gdrivePrefix = '/content/gdrive/My Drive/Colab_Downloads/'
# env_path = '/content/gdrive/My Drive/Colab Notebooks/'
# dotenv_path = env_path + "python_script.env"
# load_dotenv(dotenv_path=dotenv_path)
# Set the random seed number for reproducible results
RNG_SEED = 888
import random
random.seed(RNG_SEED)
import numpy as np
np.random.seed(RNG_SEED)
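As a quick sanity check on what the seeding above buys us (a minimal sketch using only the `random` module and NumPy; the TensorFlow seed set later in the setup plays the same role for TensorFlow operations):

```python
import random
import numpy as np

RNG_SEED = 888

def seeded_draws(seed):
    # Reseed both generators, then take one draw from each.
    random.seed(seed)
    np.random.seed(seed)
    return (random.random(), float(np.random.rand()))

# Identical seeds reproduce identical draws; that is the whole point
# of fixing RNG_SEED at the top of the script.
print(seeded_draws(RNG_SEED) == seeded_draws(RNG_SEED))  # True
```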
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import sys
import math
# import boto3
import zipfile
from datetime import datetime
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import tensorflow as tf
tf.random.set_seed(RNG_SEED)
from tensorflow import keras
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Begin the timer for the script processing
START_TIME_SCRIPT = datetime.now()
# Set up the number of CPU cores available for multi-thread processing
N_JOBS = 1
# Set up the flag for sending progress emails (setting to True enables status emails)
NOTIFY_STATUS = False
# Set the percentage sizes for splitting the dataset
TEST_SET_RATIO = 0.2
VAL_SET_RATIO = 0.2
# Set the number of folds for cross validation
N_FOLDS = 5
N_ITERATIONS = 1
# Set various default modeling parameters
DEFAULT_LOSS = 'categorical_crossentropy'
DEFAULT_METRICS = ['accuracy']
INITIAL_LR = 0.0001
DEFAULT_OPTIMIZER = tf.keras.optimizers.Adam(learning_rate=INITIAL_LR)
CLASSIFIER_ACTIVATION = 'softmax'
MAX_EPOCHS = 40
BATCH_SIZE = 16
NUM_CLASSES = 36
# CLASS_LABELS = []
# CLASS_NAMES = []
# RAW_IMAGE_SIZE = (250, 250)
TARGET_IMAGE_SIZE = (224, 224)
INPUT_IMAGE_SHAPE = (TARGET_IMAGE_SIZE[0], TARGET_IMAGE_SIZE[1], 3)
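The loss and activation configured above pair softmax outputs with categorical cross-entropy. As a minimal NumPy sketch of how that pairing behaves (the three-class logits here are illustrative, not values from the model):

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def categorical_crossentropy(y_true, y_pred):
    # One-hot y_true selects -log(predicted probability of the true class).
    return -float(np.sum(y_true * np.log(y_pred)))

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs.sum())  # softmax probabilities sum to 1.0
print(categorical_crossentropy(np.array([1.0, 0.0, 0.0]), probs))
```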
# Define the labels to use for graphing the data
TRAIN_METRIC = "accuracy"
VALIDATION_METRIC = "val_accuracy"
TRAIN_LOSS = "loss"
VALIDATION_LOSS = "val_loss"
# Define the directory locations and file names
STAGING_DIR = 'staging/'
TRAIN_DIR = 'staging/train/'
VALID_DIR = 'staging/validation/'
TEST_DIR = 'staging/test/'
TRAIN_DATASET = 'archive.zip'
# VALID_DATASET = ''
# TEST_DATASET = ''
# TRAIN_LABELS = ''
# VALID_LABELS = ''
# TEST_LABELS = ''
# OUTPUT_DIR = 'staging/'
# SAMPLE_SUBMISSION_CSV = 'sample_submission.csv'
# FINAL_SUBMISSION_CSV = 'submission.csv'
# Check the number of GPUs accessible through TensorFlow
print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))
# Print out the TensorFlow version for confirmation
print('TensorFlow version:', tf.__version__)
Num GPUs Available: 1
TensorFlow version: 2.8.0
# Set up the email notification function
def status_notify(msg_text):
    access_key = os.environ.get('SNS_ACCESS_KEY')
    secret_key = os.environ.get('SNS_SECRET_KEY')
    aws_region = os.environ.get('SNS_AWS_REGION')
    topic_arn = os.environ.get('SNS_TOPIC_ARN')
    if (access_key is None) or (secret_key is None) or (aws_region is None) or (topic_arn is None):
        sys.exit("Incomplete notification setup info. Script Processing Aborted!!!")
    sns = boto3.client('sns', aws_access_key_id=access_key, aws_secret_access_key=secret_key, region_name=aws_region)
    response = sns.publish(TopicArn=topic_arn, Message=msg_text)
    if response['ResponseMetadata']['HTTPStatusCode'] != 200:
        print('Status notification not OK with HTTP status code:', response['ResponseMetadata']['HTTPStatusCode'])
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 1 - Prepare Environment has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 1 - Prepare Environment completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 2 - Load and Prepare Images has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
# Clean up the old files and download directories before receiving new ones
!rm -rf staging/
# !rm archive.zip
!mkdir staging/
if not os.path.exists(TRAIN_DATASET):
    !wget https://dainesanalytics.com/datasets/kaggle-kritikseth-fruit-vegetable-image/archive.zip
--2022-04-09 12:50:54--  https://dainesanalytics.com/datasets/kaggle-kritikseth-fruit-vegetable-image/archive.zip
Resolving dainesanalytics.com (dainesanalytics.com)... 18.67.0.27, 18.67.0.24, 18.67.0.19, ...
Connecting to dainesanalytics.com (dainesanalytics.com)|18.67.0.27|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2130757290 (2.0G) [application/zip]
Saving to: ‘archive.zip’
archive.zip 100%[===================>] 1.98G 153MB/s in 13s
2022-04-09 12:51:08 (151 MB/s) - ‘archive.zip’ saved [2130757290/2130757290]
zip_ref = zipfile.ZipFile(TRAIN_DATASET, 'r')
zip_ref.extractall(STAGING_DIR)
zip_ref.close()
CLASS_LABELS = os.listdir(TRAIN_DIR)
print(CLASS_LABELS)
['onion', 'tomato', 'turnip', 'apple', 'pomegranate', 'sweetcorn', 'ginger', 'peas', 'lettuce', 'garlic', 'watermelon', 'potato', 'paprika', 'eggplant', 'carrot', 'bell pepper', 'sweetpotato', 'jalepeno', 'orange', 'pineapple', 'soy beans', 'lemon', 'kiwi', 'chilli pepper', 'cucumber', 'raddish', 'cabbage', 'spinach', 'mango', 'pear', 'capsicum', 'beetroot', 'grapes', 'corn', 'cauliflower', 'banana']
# Brief listing of training image files for each class
for c_label in CLASS_LABELS:
    training_class_dir = os.path.join(TRAIN_DIR, c_label)
    training_class_files = os.listdir(training_class_dir)
    print('Number of training images for', c_label, ':', len(training_class_files))
    print('Training samples for', c_label, ':', training_class_files[:5], '\n')
Number of training images for onion : 94
Training samples for onion : ['Image_84.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_94.png']
Number of training images for tomato : 92
Training samples for tomato : ['Image_84.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_80.png']
Number of training images for turnip : 98
Training samples for turnip : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for apple : 68
Training samples for apple : ['Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_87.jpg']
Number of training images for pomegranate : 79
Training samples for pomegranate : ['Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_80.png']
Number of training images for sweetcorn : 91
Training samples for sweetcorn : ['Image_84.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_76.jpg']
Number of training images for ginger : 68
Training samples for ginger : ['Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_7.jpg']
Number of training images for peas : 100
Training samples for peas : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for lettuce : 97
Training samples for lettuce : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for garlic : 92
Training samples for garlic : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for watermelon : 84
Training samples for watermelon : ['Image_84.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_89.png']
Number of training images for potato : 77
Training samples for potato : ['Image_64.jpg', 'Image_1.jpg', 'Image_3.jpg', 'Image_80.png', 'Image_76.jpg']
Number of training images for paprika : 83
Training samples for paprika : ['Image_14.png', 'Image_85.jpeg', 'Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg']
Number of training images for eggplant : 84
Training samples for eggplant : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_3.jpg', 'Image_87.jpg']
Number of training images for carrot : 82
Training samples for carrot : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for bell pepper : 90
Training samples for bell pepper : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_41.png']
Number of training images for sweetpotato : 69
Training samples for sweetpotato : ['Image_84.jpg', 'Image_64.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_76.jpg']
Number of training images for jalepeno : 88
Training samples for jalepeno : ['Image_84.jpg', 'Image_64.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_87.jpg']
Number of training images for orange : 69
Training samples for orange : ['Image_40.jpg', 'Image_3.jpg', 'Image_76.jpg', 'Image_87.jpg', 'Image_7.jpg']
Number of training images for pineapple : 99
Training samples for pineapple : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for soy beans : 97
Training samples for soy beans : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for lemon : 82
Training samples for lemon : ['Image_84.jpg', 'Image_3.jpg', 'Image_8.png', 'Image_7.jpg', 'Image_31.jpg']
Number of training images for kiwi : 88
Training samples for kiwi : ['Image_84.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_76.jpg']
Number of training images for chilli pepper : 87
Training samples for chilli pepper : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for cucumber : 94
Training samples for cucumber : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for raddish : 81
Training samples for raddish : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
Number of training images for cabbage : 92
Training samples for cabbage : ['Image_1.jpg', 'Image_26.JPG', 'Image_3.jpg', 'Image_76.jpg', 'Image_87.jpg']
Number of training images for spinach : 97
Training samples for spinach : ['Image_84.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_80.png']
Number of training images for mango : 86
Training samples for mango : ['Image_64.jpg', 'Image_1.jpg', 'Image_3.jpg', 'Image_87.jpg', 'Image_7.jpg']
Number of training images for pear : 89
Training samples for pear : ['Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_76.jpg']
Number of training images for capsicum : 89
Training samples for capsicum : ['Image_96.JPG', 'Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg']
Number of training images for beetroot : 88
Training samples for beetroot : ['Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_76.jpg']
Number of training images for grapes : 100
Training samples for grapes : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_3.jpg', 'Image_76.jpg']
Number of training images for corn : 87
Training samples for corn : ['Image_84.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_7.jpg']
Number of training images for cauliflower : 79
Training samples for cauliflower : ['Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg', 'Image_76.jpg', 'Image_7.jpg']
Number of training images for banana : 75
Training samples for banana : ['Image_84.jpg', 'Image_64.jpg', 'Image_1.jpg', 'Image_40.jpg', 'Image_3.jpg']
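The per-class counts listed above can be cross-checked against the total that flow_from_directory reports later in the notebook ("Found 3115 images belonging to 36 classes"); a quick tally:

```python
# Per-class training image counts transcribed from the listing above.
counts = [94, 92, 98, 68, 79, 91, 68, 100, 97, 92, 84, 77, 83, 84, 82, 90,
          69, 88, 69, 99, 97, 82, 88, 87, 94, 81, 92, 97, 86, 89, 89, 88,
          100, 87, 79, 75]
print(len(counts))   # 36 classes
print(sum(counts))   # 3115 training images in total
```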
# Brief listing of validation image files for each class
for c_label in CLASS_LABELS:
    validation_class_dir = os.path.join(VALID_DIR, c_label)
    validation_class_files = os.listdir(validation_class_dir)
    print('Number of validation images for', c_label, ':', len(validation_class_files))
    print('Validation samples for', c_label, ':')
    print(validation_class_files[:5], '\n')
Number of validation images for onion : 10
Validation samples for onion : ['Image_1.jpg', 'Image_3.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
Number of validation images for tomato : 10
Validation samples for tomato : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for turnip : 10
Validation samples for turnip : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for apple : 10
Validation samples for apple : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for pomegranate : 10
Validation samples for pomegranate : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for sweetcorn : 10
Validation samples for sweetcorn : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for ginger : 10
Validation samples for ginger : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for peas : 10
Validation samples for peas : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for lettuce : 9
Validation samples for lettuce : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of validation images for garlic : 10
Validation samples for garlic : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for watermelon : 10
Validation samples for watermelon : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for potato : 10
Validation samples for potato : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of validation images for paprika : 10
Validation samples for paprika : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for eggplant : 10
Validation samples for eggplant : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for carrot : 9
Validation samples for carrot : ['Image_1.jpg', 'Image_3.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
Number of validation images for bell pepper : 9
Validation samples for bell pepper : ['Image_1.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
Number of validation images for sweetpotato : 10
Validation samples for sweetpotato : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for jalepeno : 9
Validation samples for jalepeno : ['Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
Number of validation images for orange : 9
Validation samples for orange : ['Image_3.jpg', 'Image_7.jpg', 'Image_5.jpg', 'Image_6.jpg', 'Image_9.jpg']
Number of validation images for pineapple : 10
Validation samples for pineapple : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for soy beans : 10
Validation samples for soy beans : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for lemon : 10
Validation samples for lemon : ['Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_10.jpg']
Number of validation images for kiwi : 10
Validation samples for kiwi : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for chilli pepper : 9
Validation samples for chilli pepper : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_10.jpg']
Number of validation images for cucumber : 10
Validation samples for cucumber : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for raddish : 9
Validation samples for raddish : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of validation images for cabbage : 10
Validation samples for cabbage : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for spinach : 10
Validation samples for spinach : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for mango : 10
Validation samples for mango : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for pear : 10
Validation samples for pear : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for capsicum : 10
Validation samples for capsicum : ['Image_1.jpg', 'Image_7.jpg', 'Image_3.JPG', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for beetroot : 10
Validation samples for beetroot : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for grapes : 9
Validation samples for grapes : ['Image_1.jpg', 'Image_3.jpg', 'Image_8.jpg', 'Image_6.jpg', 'Image_10.jpg']
Number of validation images for corn : 10
Validation samples for corn : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of validation images for cauliflower : 10
Validation samples for cauliflower : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of validation images for banana : 9
Validation samples for banana : ['Image_1.jpg', 'Image_3.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
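A similar cross-check works for the listing above against the "Found 351 images belonging to 36 classes" message later in the notebook: 27 classes contribute 10 images each, and 9 classes (lettuce, carrot, bell pepper, jalepeno, orange, chilli pepper, raddish, grapes, banana) contribute 9 each.

```python
# Validation image total implied by the per-class listing above.
total = 27 * 10 + 9 * 9
print(total)  # 351 validation images
```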
# Plot some training images from the dataset
nrows = len(CLASS_LABELS)
ncols = 4
training_examples = []
example_labels = []
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 3)
for c_label in CLASS_LABELS:
    training_class_dir = os.path.join(TRAIN_DIR, c_label)
    training_class_files = os.listdir(training_class_dir)
    for j in range(ncols):
        training_examples.append(os.path.join(training_class_dir, training_class_files[j]))
        example_labels.append(c_label)
# print(training_examples)
# print(example_labels)
for i, img_path in enumerate(training_examples):
    # Set up the subplot; subplot indices start at 1
    sp = plt.subplot(nrows, ncols, i + 1)
    sp.text(0, 0, example_labels[i])
    # sp.axis('Off')
    img = mpimg.imread(img_path)
    plt.imshow(img)
plt.show()
datagen_kwargs = dict(rescale=1./255)
training_datagen = ImageDataGenerator(**datagen_kwargs)
validation_datagen = ImageDataGenerator(**datagen_kwargs)
dataflow_kwargs = dict(class_mode="categorical")
do_data_augmentation = True
if do_data_augmentation:
    training_datagen = ImageDataGenerator(rotation_range=45,
                                          horizontal_flip=True,
                                          vertical_flip=True,
                                          **datagen_kwargs)
print('Loading and pre-processing the training images...')
training_generator = training_datagen.flow_from_directory(directory=TRAIN_DIR,
                                                          target_size=TARGET_IMAGE_SIZE,
                                                          batch_size=BATCH_SIZE,
                                                          shuffle=True,
                                                          seed=RNG_SEED,
                                                          **dataflow_kwargs)
print('Number of training image batches per epoch of modeling:', len(training_generator))
print('Loading and pre-processing the validation images...')
validation_generator = validation_datagen.flow_from_directory(directory=VALID_DIR,
                                                              target_size=TARGET_IMAGE_SIZE,
                                                              batch_size=BATCH_SIZE,
                                                              shuffle=False,
                                                              **dataflow_kwargs)
print('Number of validation image batches per epoch of modeling:', len(validation_generator))
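The batch counts reported by the generators follow directly from the image counts and BATCH_SIZE: a generator yields ceil(n_images / batch_size) batches per epoch, since the final batch may be partial. A quick check against the numbers in the output:

```python
import math

BATCH_SIZE = 16

def batches_per_epoch(n_images, batch_size=BATCH_SIZE):
    # flow_from_directory yields a final partial batch, hence the ceiling.
    return math.ceil(n_images / batch_size)

print(batches_per_epoch(3115))  # training: 3115 images -> 195 batches
print(batches_per_epoch(351))   # validation: 351 images -> 22 batches
```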
Loading and pre-processing the training images...
Found 3115 images belonging to 36 classes.
Number of training image batches per epoch of modeling: 195
Loading and pre-processing the validation images...
Found 351 images belonging to 36 classes.
Number of validation image batches per epoch of modeling: 22
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 2 - Load and Prepare Images completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 3 - Define and Train Models has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
# Define the function for plotting training results for comparison
def plot_metrics(history):
    fig = plt.figure(figsize=(24, 15))
    metrics = [TRAIN_LOSS, TRAIN_METRIC]
    for n, metric in enumerate(metrics):
        name = metric.replace("_", " ").capitalize()
        plt.subplot(1, 2, n + 1)
        plt.plot(history.epoch, history.history[metric], color='blue', label='Train')
        plt.plot(history.epoch, history.history['val_' + metric], color='red', linestyle="--", label='Val')
        plt.xlabel('Epoch')
        plt.ylabel(name)
        if metric == TRAIN_LOSS:
            plt.ylim([0, plt.ylim()[1]])
        else:
            plt.ylim([0, 1])
        plt.legend()
# Define the baseline model for benchmarking
def create_nn_model(input_param=INPUT_IMAGE_SHAPE, output_param=NUM_CLASSES, dense_nodes=2048,
                    classifier_activation=CLASSIFIER_ACTIVATION, loss_param=DEFAULT_LOSS,
                    opt_param=DEFAULT_OPTIMIZER, metrics_param=DEFAULT_METRICS):
    base_model = keras.applications.MobileNetV3Large(include_top=False, weights='imagenet', input_shape=input_param)
    nn_model = keras.models.Sequential()
    nn_model.add(base_model)
    nn_model.add(keras.layers.Flatten())
    nn_model.add(keras.layers.Dense(dense_nodes, activation='relu'))
    nn_model.add(keras.layers.Dense(output_param, activation=classifier_activation))
    nn_model.compile(loss=loss_param, optimizer=opt_param, metrics=metrics_param)
    return nn_model
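The custom head added on top of the backbone dominates the parameter count; a back-of-the-envelope tally (assuming the 7x7x960 feature map that MobileNetV3Large with include_top=False produces for 224x224 inputs, which matches the model summary later in the notebook):

```python
# Parameter count for the custom head, assuming a 7x7x960 backbone output.
flat_features = 7 * 7 * 960          # Flatten output size
dense_nodes = 2048
num_classes = 36

dense_params = flat_features * dense_nodes + dense_nodes   # weights + biases
output_params = dense_nodes * num_classes + num_classes

print(flat_features)   # 47040
print(dense_params)    # 96339968
print(output_params)   # 73764
```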
# Initialize the neural network model and get the training results for plotting graph
start_time_module = datetime.now()
tf.keras.utils.set_random_seed(RNG_SEED)
baseline_model = create_nn_model()
baseline_model_history = baseline_model.fit(training_generator,
                                            epochs=MAX_EPOCHS,
                                            validation_data=validation_generator,
                                            verbose=1)
print('Total time for model fitting:', (datetime.now() - start_time_module))
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet_v3/weights_mobilenet_v3_large_224_1.0_float_no_top_v2.h5
12689408/12683000 [==============================] - 0s 0us/step
12697600/12683000 [==============================] - 0s 0us/step
Epoch 1/40
11/195 [>.............................] - ETA: 1:41 - loss: 6.6334 - accuracy: 0.0936
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0. warnings.warn(str(msg))
29/195 [===>..........................] - ETA: 1:51 - loss: 5.5550 - accuracy: 0.1590
/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images "Palette images with Transparency expressed in bytes should be "
Epoch 1/40 - 181s 848ms/step - loss: 2.6545 - accuracy: 0.4559 - val_loss: 4.3681 - val_accuracy: 0.0342
Epoch 2/40 - 164s 837ms/step - loss: 0.9473 - accuracy: 0.7307 - val_loss: 4.5425 - val_accuracy: 0.0285
Epoch 3/40 - 163s 840ms/step - loss: 0.6559 - accuracy: 0.7981 - val_loss: 5.0276 - val_accuracy: 0.0256
Epoch 4/40 - 164s 842ms/step - loss: 0.4987 - accuracy: 0.8472 - val_loss: 4.8915 - val_accuracy: 0.0285
Epoch 5/40 - 165s 845ms/step - loss: 0.4411 - accuracy: 0.8687 - val_loss: 5.1992 - val_accuracy: 0.0199
Epoch 6/40 - 165s 845ms/step - loss: 0.3319 - accuracy: 0.8950 - val_loss: 5.3715 - val_accuracy: 0.0399
Epoch 7/40 - 165s 845ms/step - loss: 0.2786 - accuracy: 0.9091 - val_loss: 6.2729 - val_accuracy: 0.0513
Epoch 8/40 - 165s 845ms/step - loss: 0.2350 - accuracy: 0.9246 - val_loss: 5.6170 - val_accuracy: 0.0655
Epoch 9/40 - 165s 840ms/step - loss: 0.2168 - accuracy: 0.9316 - val_loss: 6.0162 - val_accuracy: 0.0513
Epoch 10/40 - 164s 842ms/step - loss: 0.2071 - accuracy: 0.9364 - val_loss: 6.4347 - val_accuracy: 0.0598
Epoch 11/40 - 166s 847ms/step - loss: 0.1760 - accuracy: 0.9413 - val_loss: 6.3125 - val_accuracy: 0.0826
Epoch 12/40 - 164s 844ms/step - loss: 0.1631 - accuracy: 0.9512 - val_loss: 5.5198 - val_accuracy: 0.0826
Epoch 13/40 - 165s 847ms/step - loss: 0.1664 - accuracy: 0.9499 - val_loss: 5.5987 - val_accuracy: 0.1054
Epoch 14/40 - 165s 845ms/step - loss: 0.1296 - accuracy: 0.9621 - val_loss: 5.7017 - val_accuracy: 0.1481
Epoch 15/40 - 165s 845ms/step - loss: 0.1332 - accuracy: 0.9608 - val_loss: 6.0367 - val_accuracy: 0.0997
Epoch 16/40 - 164s 843ms/step - loss: 0.1320 - accuracy: 0.9573 - val_loss: 6.0748 - val_accuracy: 0.1168
Epoch 17/40 - 164s 840ms/step - loss: 0.1467 - accuracy: 0.9567 - val_loss: 6.5917 - val_accuracy: 0.1083
Epoch 18/40 - 164s 841ms/step - loss: 0.1150 - accuracy: 0.9663 - val_loss: 5.1816 - val_accuracy: 0.1795
Epoch 19/40 - 165s 849ms/step - loss: 0.1132 - accuracy: 0.9634 - val_loss: 7.1357 - val_accuracy: 0.1339
Epoch 20/40 - 165s 845ms/step - loss: 0.0981 - accuracy: 0.9708 - val_loss: 4.3822 - val_accuracy: 0.2621
Epoch 21/40 - 165s 848ms/step - loss: 0.1030 - accuracy: 0.9679 - val_loss: 3.8613 - val_accuracy: 0.3048
Epoch 22/40 - 165s 845ms/step - loss: 0.1212 - accuracy: 0.9644 - val_loss: 4.1590 - val_accuracy: 0.2963
Epoch 23/40 - 165s 846ms/step - loss: 0.0939 - accuracy: 0.9682 - val_loss: 3.7734 - val_accuracy: 0.3647
Epoch 24/40 - 165s 844ms/step - loss: 0.0802 - accuracy: 0.9743 - val_loss: 4.0756 - val_accuracy: 0.4160
Epoch 25/40 - 165s 846ms/step - loss: 0.0965 - accuracy: 0.9701 - val_loss: 3.5073 - val_accuracy: 0.4644
Epoch 26/40 - 164s 842ms/step - loss: 0.0968 - accuracy: 0.9692 - val_loss: 3.7692 - val_accuracy: 0.4074
Epoch 27/40 - 164s 845ms/step - loss: 0.1036 - accuracy: 0.9685 - val_loss: 3.0685 - val_accuracy: 0.5157
Epoch 28/40 - 163s 837ms/step - loss: 0.0670 - accuracy: 0.9788 - val_loss: 1.1183 - val_accuracy: 0.7493
Epoch 29/40 - 163s 834ms/step - loss: 0.0774 - accuracy: 0.9740 - val_loss: 1.3942 - val_accuracy: 0.7009
Epoch 30/40 - 163s 838ms/step - loss: 0.0840 - accuracy: 0.9714 - val_loss: 1.2139 - val_accuracy: 0.7521
Epoch 31/40 - 163s 838ms/step - loss: 0.0719 - accuracy: 0.9788 - val_loss: 1.8317 - val_accuracy: 0.6553
Epoch 32/40 - 163s 838ms/step - loss: 0.0962 - accuracy: 0.9685 - val_loss: 0.8982 - val_accuracy: 0.8519
Epoch 33/40 - 164s 840ms/step - loss: 0.0729 - accuracy: 0.9766 - val_loss: 0.8852 - val_accuracy: 0.8262
Epoch 34/40 - 164s 837ms/step - loss: 0.0891 - accuracy: 0.9734 - val_loss: 0.8786 - val_accuracy: 0.8376
Epoch 35/40 - 164s 840ms/step - loss: 0.0838 - accuracy: 0.9721 - val_loss: 0.6779 - val_accuracy: 0.8689
Epoch 36/40 - 164s 840ms/step - loss: 0.0893 - accuracy: 0.9737 - val_loss: 0.2644 - val_accuracy: 0.9459
Epoch 37/40 - 164s 842ms/step - loss: 0.0527 - accuracy: 0.9830 - val_loss: 0.2723 - val_accuracy: 0.9459
Epoch 38/40 - 164s 840ms/step - loss: 0.0638 - accuracy: 0.9756 - val_loss: 0.2518 - val_accuracy: 0.9459
Epoch 39/40 - 164s 842ms/step - loss: 0.0824 - accuracy: 0.9750 - val_loss: 0.3149 - val_accuracy: 0.9402
Epoch 40/40 - 164s 839ms/step - loss: 0.0673 - accuracy: 0.9769 - val_loss: 0.3349 - val_accuracy: 0.9373
Total time for model fitting: 1:49:53.803218
baseline_model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
MobilenetV3large (Functional)  (None, 7, 7, 960)  2996352
flatten (Flatten) (None, 47040) 0
dense (Dense) (None, 2048) 96339968
dense_1 (Dense) (None, 36) 73764
=================================================================
Total params: 99,410,084
Trainable params: 99,385,684
Non-trainable params: 24,400
_________________________________________________________________
plot_metrics(baseline_model_history)
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 3 - Define and Train Models completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 4 - Tune and Optimize Models has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
# Initialize the neural network model and get the training results for plotting graph
start_time_module = datetime.now()
TUNING_LR = INITIAL_LR / 2
TUNE_OPTIMIZER = tf.keras.optimizers.Adam(learning_rate=TUNING_LR)
MINIMUM_LR = TUNING_LR / 4
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy', patience=3, verbose=1, factor=0.5, min_lr=MINIMUM_LR)
tf.keras.utils.set_random_seed(RNG_SEED)
tune_model = create_nn_model(opt_param=TUNE_OPTIMIZER)
tune_model_history = tune_model.fit(training_generator,
                                    epochs=MAX_EPOCHS,
                                    validation_data=validation_generator,
                                    callbacks=[learning_rate_reduction],
                                    verbose=1)
print('Total time for model fitting:', (datetime.now() - start_time_module))
Epoch 1/40
3/195 [..............................] - ETA: 2:23 - loss: 6.4887 - accuracy: 0.0417
/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
27/195 [===>..........................] - ETA: 2:00 - loss: 4.9248 - accuracy: 0.1429
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0.
195/195 [==============================] - 171s 849ms/step - loss: 2.5815 - accuracy: 0.4456 - val_loss: 4.1364 - val_accuracy: 0.0199 - lr: 5.0000e-05
Epoch 2/40
195/195 [==============================] - 164s 840ms/step - loss: 1.1323 - accuracy: 0.6986 - val_loss: 4.3445 - val_accuracy: 0.0199 - lr: 5.0000e-05
Epoch 3/40
195/195 [==============================] - 164s 840ms/step - loss: 0.7889 - accuracy: 0.7650 - val_loss: 4.7157 - val_accuracy: 0.0256 - lr: 5.0000e-05
Epoch 4/40
195/195 [==============================] - 164s 838ms/step - loss: 0.6134 - accuracy: 0.8215 - val_loss: 4.7531 - val_accuracy: 0.0199 - lr: 5.0000e-05
Epoch 5/40
195/195 [==============================] - 164s 843ms/step - loss: 0.4750 - accuracy: 0.8594 - val_loss: 5.1761 - val_accuracy: 0.0171 - lr: 5.0000e-05
Epoch 6/40
195/195 [==============================] - 164s 843ms/step - loss: 0.4102 - accuracy: 0.8835 - val_loss: 4.9147 - val_accuracy: 0.0285 - lr: 5.0000e-05
Epoch 7/40
195/195 [==============================] - 164s 841ms/step - loss: 0.3366 - accuracy: 0.9024 - val_loss: 5.2933 - val_accuracy: 0.0256 - lr: 5.0000e-05
Epoch 8/40
195/195 [==============================] - 164s 842ms/step - loss: 0.3290 - accuracy: 0.9066 - val_loss: 5.6377 - val_accuracy: 0.0285 - lr: 5.0000e-05
Epoch 9/40
195/195 [==============================] - 164s 841ms/step - loss: 0.2245 - accuracy: 0.9262 - val_loss: 6.1526 - val_accuracy: 0.0342 - lr: 5.0000e-05
Epoch 10/40
195/195 [==============================] - 164s 840ms/step - loss: 0.2300 - accuracy: 0.9281 - val_loss: 6.0373 - val_accuracy: 0.0513 - lr: 5.0000e-05
Epoch 11/40
195/195 [==============================] - 164s 840ms/step - loss: 0.1954 - accuracy: 0.9390 - val_loss: 5.6350 - val_accuracy: 0.0456 - lr: 5.0000e-05
Epoch 12/40
195/195 [==============================] - 164s 841ms/step - loss: 0.1874 - accuracy: 0.9416 - val_loss: 5.3027 - val_accuracy: 0.0741 - lr: 5.0000e-05
Epoch 13/40
195/195 [==============================] - 164s 842ms/step - loss: 0.1929 - accuracy: 0.9422 - val_loss: 5.7037 - val_accuracy: 0.0826 - lr: 5.0000e-05
Epoch 14/40
195/195 [==============================] - 164s 841ms/step - loss: 0.1725 - accuracy: 0.9438 - val_loss: 5.4724 - val_accuracy: 0.0826 - lr: 5.0000e-05
Epoch 15/40
195/195 [==============================] - 164s 839ms/step - loss: 0.1317 - accuracy: 0.9573 - val_loss: 5.0873 - val_accuracy: 0.1254 - lr: 5.0000e-05
Epoch 16/40
195/195 [==============================] - 163s 836ms/step - loss: 0.1401 - accuracy: 0.9589 - val_loss: 4.1553 - val_accuracy: 0.1738 - lr: 5.0000e-05
Epoch 17/40
195/195 [==============================] - 164s 838ms/step - loss: 0.1208 - accuracy: 0.9637 - val_loss: 4.5098 - val_accuracy: 0.2393 - lr: 5.0000e-05
Epoch 18/40
195/195 [==============================] - 163s 839ms/step - loss: 0.1365 - accuracy: 0.9563 - val_loss: 3.5951 - val_accuracy: 0.2934 - lr: 5.0000e-05
Epoch 19/40
195/195 [==============================] - 163s 835ms/step - loss: 0.1280 - accuracy: 0.9579 - val_loss: 3.1856 - val_accuracy: 0.3447 - lr: 5.0000e-05
Epoch 20/40
195/195 [==============================] - 163s 838ms/step - loss: 0.1240 - accuracy: 0.9631 - val_loss: 3.3382 - val_accuracy: 0.3504 - lr: 5.0000e-05
Epoch 21/40
195/195 [==============================] - 164s 838ms/step - loss: 0.1293 - accuracy: 0.9579 - val_loss: 3.2034 - val_accuracy: 0.3989 - lr: 5.0000e-05
Epoch 22/40
195/195 [==============================] - 164s 840ms/step - loss: 0.1209 - accuracy: 0.9644 - val_loss: 3.0196 - val_accuracy: 0.4444 - lr: 5.0000e-05
Epoch 23/40
195/195 [==============================] - 164s 838ms/step - loss: 0.0793 - accuracy: 0.9740 - val_loss: 2.5490 - val_accuracy: 0.5157 - lr: 5.0000e-05
Epoch 24/40
195/195 [==============================] - 164s 842ms/step - loss: 0.1061 - accuracy: 0.9705 - val_loss: 2.6239 - val_accuracy: 0.5157 - lr: 5.0000e-05
Epoch 25/40
195/195 [==============================] - 164s 840ms/step - loss: 0.0769 - accuracy: 0.9737 - val_loss: 2.5335 - val_accuracy: 0.5128 - lr: 5.0000e-05
Epoch 26/40
195/195 [==============================] - 164s 840ms/step - loss: 0.1038 - accuracy: 0.9685 - val_loss: 2.0553 - val_accuracy: 0.5641 - lr: 5.0000e-05
Epoch 27/40
195/195 [==============================] - 164s 841ms/step - loss: 0.1016 - accuracy: 0.9692 - val_loss: 1.5230 - val_accuracy: 0.6439 - lr: 5.0000e-05
Epoch 28/40
195/195 [==============================] - 164s 841ms/step - loss: 0.0824 - accuracy: 0.9746 - val_loss: 1.4390 - val_accuracy: 0.6638 - lr: 5.0000e-05
Epoch 29/40
195/195 [==============================] - 164s 839ms/step - loss: 0.0756 - accuracy: 0.9769 - val_loss: 1.0084 - val_accuracy: 0.7379 - lr: 5.0000e-05
Epoch 30/40
195/195 [==============================] - 164s 835ms/step - loss: 0.0643 - accuracy: 0.9788 - val_loss: 0.8001 - val_accuracy: 0.8063 - lr: 5.0000e-05
Epoch 31/40
195/195 [==============================] - 164s 839ms/step - loss: 0.0777 - accuracy: 0.9782 - val_loss: 0.7335 - val_accuracy: 0.8034 - lr: 5.0000e-05
Epoch 32/40
195/195 [==============================] - 164s 838ms/step - loss: 0.0570 - accuracy: 0.9791 - val_loss: 0.6549 - val_accuracy: 0.8547 - lr: 5.0000e-05
Epoch 33/40
195/195 [==============================] - 163s 835ms/step - loss: 0.0668 - accuracy: 0.9814 - val_loss: 0.5079 - val_accuracy: 0.8946 - lr: 5.0000e-05
Epoch 34/40
195/195 [==============================] - 163s 836ms/step - loss: 0.0826 - accuracy: 0.9753 - val_loss: 0.4574 - val_accuracy: 0.9145 - lr: 5.0000e-05
Epoch 35/40
195/195 [==============================] - 163s 835ms/step - loss: 0.0764 - accuracy: 0.9772 - val_loss: 0.3933 - val_accuracy: 0.9117 - lr: 5.0000e-05
Epoch 36/40
195/195 [==============================] - 164s 839ms/step - loss: 0.0766 - accuracy: 0.9782 - val_loss: 0.3075 - val_accuracy: 0.9487 - lr: 5.0000e-05
Epoch 37/40
195/195 [==============================] - 163s 839ms/step - loss: 0.0692 - accuracy: 0.9785 - val_loss: 0.3797 - val_accuracy: 0.9231 - lr: 5.0000e-05
Epoch 38/40
195/195 [==============================] - 163s 839ms/step - loss: 0.0676 - accuracy: 0.9795 - val_loss: 0.2731 - val_accuracy: 0.9630 - lr: 5.0000e-05
Epoch 39/40
195/195 [==============================] - 163s 838ms/step - loss: 0.0742 - accuracy: 0.9762 - val_loss: 0.3075 - val_accuracy: 0.9573 - lr: 5.0000e-05
Epoch 40/40
195/195 [==============================] - 163s 837ms/step - loss: 0.0747 - accuracy: 0.9750 - val_loss: 0.3003 - val_accuracy: 0.9544 - lr: 5.0000e-05
Total time for model fitting: 1:49:15.693520
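The `ReduceLROnPlateau` callback configured above halves the learning rate (`factor=0.5`) each time `val_accuracy` fails to improve for `patience=3` consecutive epochs, never dropping below `MINIMUM_LR`. The resulting schedule can be sanity-checked in plain Python; the helper below is illustrative (not part of the template), and the starting value of `5e-05` matches the `lr` shown in the tuning log:

```python
def plateau_lr(initial_lr, reductions, factor=0.5, min_lr=0.0):
    """Learning rate after `reductions` plateau-triggered cuts,
    mirroring ReduceLROnPlateau(factor=0.5, min_lr=...)."""
    return max(initial_lr * (factor ** reductions), min_lr)

tuning_lr = 5e-05            # the lr reported at the start of the tuning run
minimum_lr = tuning_lr / 4   # mirrors MINIMUM_LR = TUNING_LR / 4

print(plateau_lr(tuning_lr, 1, min_lr=minimum_lr))  # 2.5e-05
print(plateau_lr(tuning_lr, 3, min_lr=minimum_lr))  # 1.25e-05 (the floor is reached)
```

Because `min_lr` equals `TUNING_LR / 4`, at most two full halvings can take effect before the rate is clamped; in this particular run the log shows the rate was never actually reduced.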
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 4 - Tune and Optimize Models completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 5 - Finalize Model and Make Predictions has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
FINAL_LR = 0.00005
FINAL_OPTIMIZER = tf.keras.optimizers.Adam(learning_rate=FINAL_LR)
FINAL_EPOCHS = MAX_EPOCHS
tf.keras.utils.set_random_seed(RNG_SEED)
final_model = create_nn_model(opt_param=FINAL_OPTIMIZER)
final_model.fit(training_generator, epochs=FINAL_EPOCHS, verbose=1)
final_model.summary()
Epoch 1/40
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0.
4/195 [..............................] - ETA: 2:44 - loss: 6.4071 - accuracy: 0.0312
/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
195/195 [==============================] - 151s 745ms/step - loss: 2.6559 - accuracy: 0.4430
Epoch 2/40
195/195 [==============================] - 145s 744ms/step - loss: 1.1110 - accuracy: 0.7024
Epoch 3/40
195/195 [==============================] - 144s 737ms/step - loss: 0.7758 - accuracy: 0.7750
Epoch 4/40
195/195 [==============================] - 145s 741ms/step - loss: 0.6256 - accuracy: 0.8244
Epoch 5/40
195/195 [==============================] - 145s 743ms/step - loss: 0.5174 - accuracy: 0.8510
Epoch 6/40
195/195 [==============================] - 145s 741ms/step - loss: 0.4171 - accuracy: 0.8726
Epoch 7/40
195/195 [==============================] - 145s 743ms/step - loss: 0.3637 - accuracy: 0.8925
Epoch 8/40
195/195 [==============================] - 145s 740ms/step - loss: 0.2902 - accuracy: 0.9104
Epoch 9/40
195/195 [==============================] - 145s 742ms/step - loss: 0.2753 - accuracy: 0.9156
Epoch 10/40
195/195 [==============================] - 144s 736ms/step - loss: 0.2367 - accuracy: 0.9329
Epoch 11/40
195/195 [==============================] - 145s 743ms/step - loss: 0.1892 - accuracy: 0.9409
Epoch 12/40
195/195 [==============================] - 144s 739ms/step - loss: 0.1977 - accuracy: 0.9384
Epoch 13/40
195/195 [==============================] - 144s 739ms/step - loss: 0.1773 - accuracy: 0.9483
Epoch 14/40
195/195 [==============================] - 145s 740ms/step - loss: 0.1448 - accuracy: 0.9528
Epoch 15/40
195/195 [==============================] - 145s 745ms/step - loss: 0.1696 - accuracy: 0.9531
Epoch 16/40
195/195 [==============================] - 145s 742ms/step - loss: 0.1380 - accuracy: 0.9579
Epoch 17/40
195/195 [==============================] - 145s 738ms/step - loss: 0.1257 - accuracy: 0.9618
Epoch 18/40
195/195 [==============================] - 145s 741ms/step - loss: 0.1161 - accuracy: 0.9660
Epoch 19/40
195/195 [==============================] - 145s 744ms/step - loss: 0.1284 - accuracy: 0.9599
Epoch 20/40
195/195 [==============================] - 145s 741ms/step - loss: 0.1456 - accuracy: 0.9547
Epoch 21/40
195/195 [==============================] - 144s 741ms/step - loss: 0.1217 - accuracy: 0.9650
Epoch 22/40
195/195 [==============================] - 144s 739ms/step - loss: 0.1152 - accuracy: 0.9631
Epoch 23/40
195/195 [==============================] - 144s 737ms/step - loss: 0.0946 - accuracy: 0.9701
Epoch 24/40
195/195 [==============================] - 144s 740ms/step - loss: 0.1052 - accuracy: 0.9663
Epoch 25/40
195/195 [==============================] - 144s 739ms/step - loss: 0.0926 - accuracy: 0.9705
Epoch 26/40
195/195 [==============================] - 144s 739ms/step - loss: 0.0707 - accuracy: 0.9762
Epoch 27/40
195/195 [==============================] - 144s 739ms/step - loss: 0.0926 - accuracy: 0.9714
Epoch 28/40
195/195 [==============================] - 144s 741ms/step - loss: 0.0852 - accuracy: 0.9746
Epoch 29/40
195/195 [==============================] - 144s 740ms/step - loss: 0.0857 - accuracy: 0.9740
Epoch 30/40
195/195 [==============================] - 144s 736ms/step - loss: 0.0798 - accuracy: 0.9717
Epoch 31/40
195/195 [==============================] - 144s 740ms/step - loss: 0.0843 - accuracy: 0.9746
Epoch 32/40
195/195 [==============================] - 144s 740ms/step - loss: 0.0838 - accuracy: 0.9766
Epoch 33/40
195/195 [==============================] - 144s 739ms/step - loss: 0.0662 - accuracy: 0.9795
Epoch 34/40
195/195 [==============================] - 144s 738ms/step - loss: 0.0708 - accuracy: 0.9756
Epoch 35/40
195/195 [==============================] - 144s 740ms/step - loss: 0.0635 - accuracy: 0.9785
Epoch 36/40
195/195 [==============================] - 144s 739ms/step - loss: 0.0578 - accuracy: 0.9820
Epoch 37/40
195/195 [==============================] - 144s 738ms/step - loss: 0.0881 - accuracy: 0.9746
Epoch 38/40
195/195 [==============================] - 144s 738ms/step - loss: 0.0783 - accuracy: 0.9737
Epoch 39/40
195/195 [==============================] - 144s 738ms/step - loss: 0.0777 - accuracy: 0.9727
Epoch 40/40
195/195 [==============================] - 144s 738ms/step - loss: 0.0705 - accuracy: 0.9756
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
MobilenetV3large (Functional) (None, 7, 7, 960)        2996352
flatten_2 (Flatten) (None, 47040) 0
dense_4 (Dense) (None, 2048) 96339968
dense_5 (Dense) (None, 36) 73764
=================================================================
Total params: 99,410,084
Trainable params: 99,385,684
Non-trainable params: 24,400
_________________________________________________________________
# Brief listing of test image files for each class
for c_label in CLASS_LABELS:
    test_class_dir = os.path.join(TEST_DIR, c_label)
    test_class_files = os.listdir(test_class_dir)
    print('Number of test images for', c_label, ':', len(test_class_files))
    print('Test samples for', c_label, ':', test_class_files[:5])
Number of test images for onion : 10
Test samples for onion : ['Image_1.jpg', 'Image_3.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
Number of test images for tomato : 10
Test samples for tomato : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for turnip : 10
Test samples for turnip : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for apple : 10
Test samples for apple : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for pomegranate : 10
Test samples for pomegranate : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for sweetcorn : 10
Test samples for sweetcorn : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for ginger : 10
Test samples for ginger : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for peas : 10
Test samples for peas : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for lettuce : 10
Test samples for lettuce : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of test images for garlic : 10
Test samples for garlic : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for watermelon : 10
Test samples for watermelon : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for potato : 10
Test samples for potato : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of test images for paprika : 10
Test samples for paprika : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for eggplant : 10
Test samples for eggplant : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for carrot : 10
Test samples for carrot : ['Image_1.jpg', 'Image_3.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
Number of test images for bell pepper : 10
Test samples for bell pepper : ['Image_1.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
Number of test images for sweetpotato : 10
Test samples for sweetpotato : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for jalepeno : 10
Test samples for jalepeno : ['Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
Number of test images for orange : 10
Test samples for orange : ['Image_3.jpg', 'Image_7.jpg', 'Image_8.jpeg', 'Image_5.jpg', 'Image_6.jpg']
Number of test images for pineapple : 10
Test samples for pineapple : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for soy beans : 10
Test samples for soy beans : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for lemon : 10
Test samples for lemon : ['Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_10.jpg']
Number of test images for kiwi : 10
Test samples for kiwi : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for chilli pepper : 10
Test samples for chilli pepper : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpeg']
Number of test images for cucumber : 10
Test samples for cucumber : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for raddish : 10
Test samples for raddish : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of test images for cabbage : 10
Test samples for cabbage : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for spinach : 10
Test samples for spinach : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for mango : 10
Test samples for mango : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for pear : 10
Test samples for pear : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for capsicum : 10
Test samples for capsicum : ['Image_1.jpg', 'Image_7.jpg', 'Image_3.JPG', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for beetroot : 10
Test samples for beetroot : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for grapes : 10
Test samples for grapes : ['Image_1.jpg', 'Image_3.jpg', 'Image_8.jpg', 'Image_6.jpg', 'Image_10.jpg']
Number of test images for corn : 10
Test samples for corn : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_5.jpg']
Number of test images for cauliflower : 10
Test samples for cauliflower : ['Image_1.jpg', 'Image_3.jpg', 'Image_7.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of test images for banana : 9
Test samples for banana : ['Image_1.jpg', 'Image_3.jpg', 'Image_8.jpg', 'Image_5.jpg', 'Image_6.jpg']
datagen_kwargs = dict(rescale=1./255)
test_datagen = ImageDataGenerator(**datagen_kwargs)
dataflow_kwargs = dict(class_mode="categorical")
print('Loading and pre-processing the test images...')
test_generator = test_datagen.flow_from_directory(directory=TEST_DIR,
                                                  target_size=TARGET_IMAGE_SIZE,
                                                  batch_size=BATCH_SIZE,
                                                  shuffle=False,
                                                  **dataflow_kwargs)
print('Number of test image batches per epoch of modeling:', len(test_generator))
Loading and pre-processing the test images...
Found 359 images belonging to 36 classes.
Number of test image batches per epoch of modeling: 23
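The `rescale=1./255` factor in `datagen_kwargs` simply multiplies every pixel by 1/255, mapping 8-bit intensities into [0, 1] before the batches reach the network. The equivalent transform in plain NumPy (a toy illustration, not template code):

```python
import numpy as np

# A toy 2x2 grayscale "image" with 8-bit pixel values
pixels = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# What ImageDataGenerator(rescale=1./255) applies to each batch
scaled = pixels.astype(np.float32) * (1.0 / 255.0)

print(scaled.min(), scaled.max())  # approximately 0.0 and 1.0
```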
# Print the labels used for the modeling
print(test_generator.class_indices)
{'apple': 0, 'banana': 1, 'beetroot': 2, 'bell pepper': 3, 'cabbage': 4, 'capsicum': 5, 'carrot': 6, 'cauliflower': 7, 'chilli pepper': 8, 'corn': 9, 'cucumber': 10, 'eggplant': 11, 'garlic': 12, 'ginger': 13, 'grapes': 14, 'jalepeno': 15, 'kiwi': 16, 'lemon': 17, 'lettuce': 18, 'mango': 19, 'onion': 20, 'orange': 21, 'paprika': 22, 'pear': 23, 'peas': 24, 'pineapple': 25, 'pomegranate': 26, 'potato': 27, 'raddish': 28, 'soy beans': 29, 'spinach': 30, 'sweetcorn': 31, 'sweetpotato': 32, 'tomato': 33, 'turnip': 34, 'watermelon': 35}
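`class_indices` maps label to index, but `predict()` returns indices (via argmax), so an inverted lookup is handy when reporting predictions as readable names. A small sketch, using a truncated copy of the mapping printed above:

```python
# Truncated copy of test_generator.class_indices printed above
class_indices = {'apple': 0, 'banana': 1, 'beetroot': 2, 'bell pepper': 3}

# Invert label->index into index->label for decoding argmax predictions
index_to_label = {index: label for label, index in class_indices.items()}

predicted_index = 1  # e.g. the argmax of one softmax output row
print(index_to_label[predicted_index])  # banana
```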
final_model.evaluate(test_generator, verbose=1)
13/23 [===============>..............] - ETA: 8s - loss: 0.2699 - accuracy: 0.9471
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0.
23/23 [==============================] - 19s 802ms/step - loss: 0.2658 - accuracy: 0.9304
[0.26583296060562134, 0.9303621053695679]
# Predict class probabilities for each test image, then take the argmax as the predicted label
test_pred = final_model.predict(test_generator)
test_predictions = np.argmax(test_pred, axis=-1)
test_original = test_generator.labels
print('Accuracy Score:', accuracy_score(test_original, test_predictions))
print(confusion_matrix(test_original, test_predictions))
print(classification_report(test_original, test_predictions))
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0.
Accuracy Score: 0.9303621169916435
[[ 7 1 0 ... 0 0 0]
[ 0 7 0 ... 0 0 0]
[ 0 0 10 ... 0 0 0]
...
[ 0 0 0 ... 10 0 0]
[ 0 0 0 ... 0 10 0]
[ 0 0 0 ... 0 0 10]]
precision recall f1-score support
0 1.00 0.70 0.82 10
1 0.88 0.78 0.82 9
2 1.00 1.00 1.00 10
3 0.82 0.90 0.86 10
4 0.77 1.00 0.87 10
5 0.82 0.90 0.86 10
6 1.00 0.90 0.95 10
7 1.00 1.00 1.00 10
8 1.00 1.00 1.00 10
9 0.82 0.90 0.86 10
10 1.00 1.00 1.00 10
11 0.91 1.00 0.95 10
12 1.00 1.00 1.00 10
13 1.00 1.00 1.00 10
14 1.00 1.00 1.00 10
15 1.00 1.00 1.00 10
16 1.00 1.00 1.00 10
17 0.71 1.00 0.83 10
18 1.00 1.00 1.00 10
19 0.89 0.80 0.84 10
20 1.00 1.00 1.00 10
21 0.80 0.80 0.80 10
22 0.83 1.00 0.91 10
23 1.00 1.00 1.00 10
24 0.91 1.00 0.95 10
25 1.00 1.00 1.00 10
26 1.00 1.00 1.00 10
27 0.80 0.80 0.80 10
28 1.00 1.00 1.00 10
29 0.90 0.90 0.90 10
30 1.00 0.70 0.82 10
31 0.89 0.80 0.84 10
32 1.00 0.60 0.75 10
33 1.00 1.00 1.00 10
34 1.00 1.00 1.00 10
35 1.00 1.00 1.00 10
accuracy 0.93 359
macro avg 0.94 0.93 0.93 359
weighted avg 0.94 0.93 0.93 359
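The per-class recall column of the report can be recovered directly from the confusion matrix: divide each diagonal entry (correct predictions) by its row sum (true instances of that class). A toy 3-class example, not the matrix above:

```python
import numpy as np

# Toy 3-class confusion matrix: rows = true class, columns = predicted class
cm = np.array([[7, 1, 2],
               [0, 9, 1],
               [0, 0, 10]])

# Recall per class: correct predictions over true instances of the class
recall = np.diag(cm) / cm.sum(axis=1)
print(recall)  # [0.7 0.9 1. ]

# Overall accuracy: trace (all correct predictions) over total samples
accuracy = np.trace(cm) / cm.sum()
```

Per-class precision works the same way with column sums (`cm.sum(axis=0)`), which is why classes that attract spurious predictions (e.g. class 17, lemon, at 0.71 precision above) stand out even when their recall is perfect.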
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 5 - Finalize Model and Make Predictions completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
print('Total time for the script:', (datetime.now() - START_TIME_SCRIPT))
Total time for the script: 6:46:07.735870